
Update Results Guidelines for unverified claims and misrepresentation of verified results #137

Open
wants to merge 12 commits into base: master

Conversation

nv-rborkar
Contributor

Submitters are encouraged to use published official MLPerf scores for comparisons. If deriving unofficial results for a competitor's products or services, one should use that submitter's code repository, if it exists, rather than the reference code, in the spirit of making a good-faith best effort.

Also added a timeline (24 hours) for taking down content that violates the results guidelines.

@github-actions

github-actions bot commented Nov 16, 2022

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

@tjablin
Collaborator

tjablin commented Dec 6, 2022

WG: Some concerns about whether 24 hours is achievable in all cases. Questions about business days versus calendar days, and whose business days apply around national holidays. Some ambiguity about the definition of "competitor." Questions about how the rule applies to submissions with performance-critical components from multiple vendors (e.g., an NVIDIA accelerator with an AMD CPU: which repository should be used?). How should disputes about "good faith" be handled?

@bitfort
Contributor

bitfort commented Dec 18, 2022

Flagging that I want a chance to speak more to this PR before we move forward with deciding whether or not to merge.

@TheKanter
Contributor

TheKanter commented Dec 18, 2022

  1. This seems like a marketing discussion. Are the technical WGs the right venue to resolve it?
  2. I don't believe MLC has the resources to support a 24-hour enforcement or resolution loop. Any timeline needs to realistically incorporate time for MLC to review alleged violations, time to decide on appropriate action, time to contact the alleged violator, time to hear back from the alleged violator, etc., all limited by MLC resources; and, as TJ indicated, we would also need to take holidays into account.
  3. Suppose a hardware platform has been submitted using SW stack X. Someone wishes to make a comparison of the hardware platform with SW stack Y (Y != X). How would this rule apply in that scenario?
  4. What if someone wants to make an unofficial comparison specifically using the reference models for intellectually valid reasons?
  5. Does this imply that reference implementations are intrinsically inferior or less relevant than other implementations?
  6. I would like to understand in plain English what this rule is attempting to accomplish.

Contributor
@bitfort bitfort left a comment

I think making sure people make good comparisons is important. I'd like to better understand and discuss the implications this has for "out of the box" and other comparisons. I could see some interesting comparisons that would be excluded by this as written.

I also think we could explore additional avenues to address this concern - for example, better documenting the references so people don't accidentally compare using them without understanding the implications of doing so.

@nv-rborkar
Contributor Author

Thanks @tjablin , @bitfort , @TheKanter for the questions & feedback.
I have modified the language to better answer some of the questions above.

  • The primary goal is to prohibit misuse of reference code to derive unofficial scores for a submitter who already has an official submission score. Reference implementations often focus more on readability or explainability than on performance.

  • I do agree we want to encourage SW innovation rather than prohibit interesting comparisons, and I have modified the language to allow that. One cannot equate the current reference implementations with "out-of-the-box" performance without defining the box constraints or ensuring the box has integrations for different SW/HW stacks (the references are mostly tied to a single reference HW platform).

@TheKanter what would you propose as a sensible but practical timeline for taking down disputed content? Note that reposting with a fix can take longer, but we should try to have a timeline for taking down violating content, as media hit cycles are short and matter only in the first few days.

Note in the WG we will also be asking participants to get this reviewed by their respective marketing representatives.

@bitfort
Contributor

bitfort commented Feb 9, 2023

Thanks for the clarification - my question here:

It is my understanding that under MLPerf rules, it is legal for anyone to submit on anyone else's hardware and software, right? For example, Facebook could submit a TensorFlow + TPU submission. Google could submit a PyTorch + Intel submission.

Facebook wouldn't be required to use Google's preferred or previously submitted models; Google wouldn't be required to use any preferred software or models from Intel.

If I can submit a new score which isn't based on a previous "official submission", why do my unverified scores need to be based on a previous "official submission"?

@bitfort
Contributor

bitfort commented Feb 9, 2023

For example, to be more concrete, suppose I thought that Company X's optimizations didn't reflect what users do in practice. So I wanted to publish a blog post saying "Company X's MLPerf results if they wrote the code the way real researchers write the code", where I give an unverified MLPerf result using a different implementation of ResNet-50 than the one Google has previously submitted. This seems like a fair point to make, because I'm saying "No one in practice writes their models like Company X writes their models for MLPerf - here's what it would really look like."

While Company X may not like what I have to say, it isn't clear that they should be able to order me to take it down. My reading of this PR is that it would allow them to have it taken down because I didn't use their previous official submission.

As long as I am following the rules of MLPerf, I think I should be allowed to share a different opinion on what is important to users. If I think a different implementation is a better reflection of what users value, what really matters is whether I'm following the rules, not whether people like what I have to say.

If someone can show that my unverified results are not following MLPerf rules, then perhaps a takedown may be considered. But the requirement to use an "official submission" seems to stifle legitimate disagreement.

I think we may want to find a way to adjust the language to separate "alternative perspectives & fair criticism" from "misleading and incorrect."

Also - I may be misunderstanding here - please do correct me.

@nv-rborkar
Contributor Author

Good questions w/ examples. Yes there is a misunderstanding :)

If I can submit a new score which isn't based on a previous "official submission", why do my unverified scores need to be based on a previous "official submission"?

The updated language doesn't require the unverified score to be based on a previous official submission. It requires you to also show the official submission score alongside your unverified score in comparisons (e.g., blogs, charts, etc.). This could also be a good incentive for SW innovators to beat SOTA MLPerf performance.

While Company X may not like what I have to say, it isn't clear that they should be able to order me to take it down. My reading of this PR would allow them to take it down because I didn't use their previous official submission.

In this example, one wouldn't be ordered to take the opinion down as long as the comparison with the unverified score transparently mentions:

  • (as per existing rules) that it is "unverified", along with any model differences (i.e., Closed/Open division, system differences, etc.) and other differences
  • (as per this PR) Google's official score in the comparison

For example, Company X has an official score on HW platform A with SW stack 1.
I have a new SW stack 2 which is easier to program and has quicker development cycles.
I am allowed to make an unverified claim of A's performance on 2 as long as, in the comparison (chart, blog), I also show what A achieved officially with SW stack 1.

  • If 1 is faster than 2: we do not penalize A's best capability, but we also get the opportunity to present 2's usability and advantage for practitioners.
  • If 2 is faster than 1: the originator of the claim would obviously want to showcase this comparison and is allowed to by all means.

@TheKanter
Contributor

TheKanter commented Feb 15, 2023

@nv-rborkar I'd like to discuss privately how other benchmarks handle this.

@nv-rborkar
Contributor Author

nv-rborkar commented Feb 15, 2023

@bitfort , @TheKanter I have modified the PR:

  • No regulation on SW for unverified scores.
  • Requirement to state official scores, where they exist, in comparisons.
  • There should be some timeline for taking down violating content. @TheKanter could you propose a deadline? Note that resolving and pushing a fix can take more time, but taking down violations should happen as soon as reasonably possible.

Also would you like to join the Training WG meeting on 2/16 to discuss this more?

@nv-rborkar
Contributor Author

03/09/2023 Training WG update:
Updated the PR to incorporate @drcanchi 's feedback that official scores of the "closest available submission" must be stated in comparisons.
AI on Intel to provide feedback on the deadline for addressing violations.

@TheKanter
Contributor

TheKanter commented Mar 10, 2023

RE: Timeline for complaints and violations

Generally the flow should look like:

  1. Someone notices an improper comparison or other violation by X
  2. Someone notifies MLC of violation
  3. MLC determines violation is in fact a problem
  4. MLC sends out a notification letter to X
  5. X fixes the violation

Current guidance is that MLC can send out a letter requesting a fix within 96 hours for step 5. We should figure out a schedule for steps 3 and 4 that is realistic given MLC resources.

@drcanchi

Regarding step 4, the feedback I got from internal discussions is to set the timeline in working/business days rather than hours.

@TheKanter
Contributor

TheKanter commented Mar 10, 2023 via email

Removed the violation-addressing deadline from this PR as it would be more productive to discuss it in a separate PR.
Added language to disallow misrepresentation of performance from 3rd-party submitters.
@nv-rborkar
Contributor Author

Removed the deadline for handling violations from this PR, as it would be more productive to discuss it in a separate PR.

This PR now addresses only misrepresentation of reference performance or 3rd-party performance. Please review.

arjunsuresh
arjunsuresh previously approved these changes Apr 19, 2023
@@ -202,7 +206,7 @@ The MLPerf mark is owned by MLCommons, and only MLCommons and its authorized lic

Any MLCommons Member, Test Partner, or third party may report a violation of these Guidelines via email to the MLCommons Executive Director (“ED”) & Working Group (“WG”) chairs of the appropriate benchmark. Upon confirming the violation in their discretion, ED & WG chairs would inform the potential violator and request remedial action. If the ED, WG chairs, and potential violator are unable to reach a mutually satisfactory conclusion, the issue can be raised in WG to seek resolution via WG vote.

A non-exhaustive list of possible remedial actions or penalties based on the degree of violation is noted below. Taking or not taking any or all actions on this list or otherwise does not constitute a waiver of any enforcement rights or other rights in the MLPerf benchmarks, software, and/or trademark.
Violating content must be taken down within first 48 hours of violation being reported. A non-exhaustive list of possible remedial actions or penalties based on the degree of violation is noted below. Taking or not taking any or all actions on this list or otherwise does not constitute a waiver of any enforcement rights or other rights in the MLPerf benchmarks, software, and/or trademark.
Contributor

Can this be changed to within 2 business days? Also, the violation should be agreed upon by the relevant WG, as not all reports will necessarily be valid, right?

@TheKanter
Contributor

Hrmm, I see a change that singles out IHVs; I don't think that's a good idea. This thing has been lurking around for a long time. Is there a way we can find some non-contentious subsets and commit those? E.g., the 4-day notice should be updated. @nv-rborkar

@nv-rborkar
Contributor Author

If you click on the "Files" diff at the top, it will show the real diff, where the notice-period clause has been removed from this PR to simplify the issues being discussed.

Thanks for the feedback @TheKanter about singling out IHVs - modified the language.

@TheKanter
Contributor

@nv-rborkar let's discuss some of the ideas here offline. I'm not sure I understand the goals.

@mrasquinha-g
Contributor

LGTM from the Inference Working Group.

@TheKanter
Contributor

We have separated the timeline item out into a PR that has been accepted; this PR now focuses solely on allowed comparisons.

@@ -12,6 +12,10 @@ If you used an MLPerf™ benchmark to obtain a result for your product or servic

Since your results have not gone through MLCommons review and, therefore, have not been verified by MLCommons, you must indicate your results are not verified by using the term “unverified” next to each result score and by using the following language when publishing or otherwise discussing your results: “_Result not verified by MLCommons Association._” You can include this statement in a footnote, as described in Section 3 below.

If the components (e.g. HW) that substantially determine ML performance of an "unverified" score also have a verified official score (e.g. same HW with a different submission SW stack), it is required to state the official submission score of the closest available system in any public comparisons.

Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance).
Contributor

"reference code performance should not be equated with "out-of-box" performance" - we may want to remove this since it's possibly opinionated. I could see people making the argument that readable code is out of the box code.

Perhaps we could say, "It is not recommended to use the reference implementations to compare performance between systems." and not go into further details?

Contributor
@DilipSequeira DilipSequeira May 4, 2023

Reference code is not typical of what you would deploy in production, and deliberately so, as all readability/perf tradeoffs are made in favor of readability - e.g. may be written for batch 1. If we want people to be able to use reference code to characterize stack performance "out of the box", and want such comparisons not to be misleading, then the reference code needs to be designed with that in mind, and I think that'd be a net loss. Submitters should not be reviewing reference code with the consideration "if someone runs this on my stack, is it remotely reflective of production performance?"

Also, I'm not sure of the value of recommending best practices in this document. Folks using MLPerf reference implementations to present comparisons either have a scientific intent, and will understand what they're measuring and carefully describe their methodology, or they have a marketing intent and will present their product in the best light allowed by MLCommons. One of the goals of the ToU is to minimize the level of misrepresentation by the latter group.

Contributor

I would agree if you're recommending to remove this completely:

"Note that benchmark reference code implementations often prioritize readability and should not be mistaken for performance optimized implementations (e.g. reference code performance should not be equated with "out-of-box" performance)."

Contributor
@DilipSequeira DilipSequeira May 4, 2023

Possibly a better framing of this would be something like

"Where the performance of a reference implementation is used, the main text or figure must state that the reference implementation is unoptimized and not reflective of performance in production."

@@ -77,8 +81,8 @@ MLPerf results may not be compared against non-MLPerf results. For example, an M

== When comparing MLPerf results, you must identify any submission differences

When comparing results the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators). When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified.

When comparing results the main text, table, or figure must clearly identify any difference in version, division, category, verified or unverified status, scenario or chip count (count of the compute devices executing the largest number of ops, which could be processors or accelerators) or submitter name. When comparing Open and Closed division results, any ways in which the Open result would not qualify as a Closed result must be identified. When making comparisons, submissions must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. a submission by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance.
Contributor

"scenario, chip count" (remove "or").

Contributor

"When making comparisons, submissions must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. a submission by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance"

This seems to be a pretty fundamental change. For example, this means I cannot submit an Apple phone benchmark and make claims around what performance Apple provides -- unless I work for Apple?

Contributor
@bitfort bitfort May 3, 2023

Could we add qualifying language here, specifically "UNVERIFIED RESULTS"?

"When making comparisons, UNVERIFIED RESULTS must not be portrayed as representing the performance available from a submitter unless the submission was by that submitter - e.g. an UNVERIFIED RESULT by SuperServers Inc that happened to use an accelerator from AccelCorp Ltd must not be portrayed as representing AccelCorp's performance"

Contributor
@DilipSequeira DilipSequeira May 3, 2023

If you want to claim that's the performance your submission achieved when you ran your software stack on Apple hardware, that's fine, but if you used a third-party stack on a 2018 iPhone, it'd be misrepresentation to claim that's Apple's performance even if that's a verified result.

Contributor

I think this asks the bigger question, though: is MLPerf a hardware benchmark, or a hardware+software benchmark?

For example, suppose my claim is: "We considered the following 3 software stacks for the hardware: A, B, and the vendor's. Our requirements prevent us from using the vendor stack and the B stack, so we had to use the A stack. Therefore, comparing on software stack A, this is the performance of hardware X."

An example of this could be TF versus PT on TPUs. It wouldn't be fair for Google to claim TF is the only one that counts; not everyone wants to use TF.

Contributor
@bitfort bitfort May 3, 2023

I think my question is: how can we prevent someone like Google from dictating which framework we need to use to be "official"? Because they clearly have a conflict there.

Contributor

I don't think it protects hardware more than software. The intent is, for example, to protect Google from being compared against using a cherry-picked usage of a TPU somewhere that they have no control over. The same applies to software: if you want to claim that your submission is faster than another submission using ONNX-RT on T4, fine, but you should not claim you've thereby beaten Microsoft or Nvidia.

We want MLPerf to have third-party submissions on commodity stacks, but at the same time to limit the potential for misleading claims based on those submissions.

I appreciate that the example is specific to hardware, and perhaps we should change it to clarify that it also covers software.

Contributor

Also - if we're adding language not scoped to "unverified results", should we change the title? (The title implies this covers only unverified claims.)

Contributor

Yes, that makes sense - @nv-rborkar ?

Contributor

Also - asked a question offline to clarify my understanding. I think I may be missing something in reading this.

Contributor
@bitfort bitfort May 3, 2023

And thanks @DilipSequeira and @nv-rborkar for keeping up with all these questions. You both have been very patient in working through 6 months of discussion :)

@nv-rborkar nv-rborkar changed the title Update Results Guidelines for unverified claims Update Results Guidelines for unverified claims and misrepresentation of verified results May 3, 2023
@mrmhodak
Contributor

mrmhodak commented May 4, 2023

This PR mentions the reference code as though it were the only way to generate unofficial results, but there are other ways to run: CK and MLCube. In fact, the goal of those tools is to make MLPerf easy to run.

This PR puts an extra burden on those efforts. Is this even needed? We already have a disclaimer that those results have not been verified and thus are unofficial.

Plus, there is no definition of what a "verified official score" actually is - presumably a submission by a chip manufacturer? This works for Nvidia and Intel, but what about results where a 3rd-party submission is the only one - does that count as an official score?

@TheKanter
Contributor

TheKanter commented May 4, 2023 via email
